Graph Models in Information Hiding
Information hiding allows us to embed secret information into digital objects such as images without significantly distorting them. The object carrying the hidden information is then transmitted to a receiver, possibly over an insecure channel. To transmit such an object securely, the distortion caused by data embedding should be kept as low as possible, which is referred to as the rate-distortion optimization problem. Many conventional methods optimize the data embedding procedure in a heuristic fashion, which may not be optimal in terms of rate-distortion performance. In this chapter, we introduce novel approaches that apply graph theory to information hiding. These graph models are general and can improve the rate-distortion performance of information hiding systems. Beyond rate-distortion optimization, recent graph models used in the system design of information hiding are also reviewed. This chapter is intended as a tutorial introduction to advanced graph models applied to information hiding.
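The rate-distortion optimization mentioned above can be pictured as a matching problem on a bipartite graph between message bits and cover elements, with edge weights given by embedding distortion. The following minimal Python sketch is purely illustrative (the pixel values, weights, and LSB-based cost model are hypothetical, not the chapter's actual models); it finds the minimum-distortion bit-to-pixel assignment by brute force, which a bipartite min-cost matching solver would do on a larger instance:

```python
from itertools import permutations

pixels = [52, 57, 60, 63, 70, 81]          # toy cover values (hypothetical)
weights = [3.0, 1.0, 2.5, 0.5, 4.0, 1.5]   # per-pixel distortion cost of an LSB flip
bits = [1, 1, 1, 1]                         # message to embed

def flip_cost(bit, j):
    # zero cost if pixel j already carries the bit in its LSB, else its weight
    return 0.0 if (pixels[j] & 1) == bit else weights[j]

# exhaustive search over bit -> pixel assignments: a brute-force stand-in for
# solving min-cost matching on the bipartite graph (bits vs. pixels)
best = min(permutations(range(len(pixels)), len(bits)),
           key=lambda cols: sum(flip_cost(b, j) for b, j in zip(bits, cols)))
total = sum(flip_cost(b, j) for b, j in zip(bits, best))
```

Here three of the four bits can be carried for free by pixels whose LSB already matches, and the optimizer pays only the cheapest flip (2.5) for the fourth, whereas a heuristic left-to-right embedding could pay more.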
Ensemble Reversible Data Hiding
Conventional reversible data hiding (RDH) algorithms often treat the host as a whole when embedding a secret payload. To achieve satisfactory rate-distortion performance, the secret bits are embedded into a noise-like component of the host, such as prediction errors. From a rate-distortion optimization view, this may not be optimal, since all data embedding units use identical parameters. This motivates us to present a segmented data embedding strategy for efficient RDH in this paper, in which the raw host is partitioned into multiple subhosts so that each can optimize and use its data embedding parameters independently. Moreover, the strategy allows different RDH algorithms to be applied within different subhosts, which we define as an ensemble. Note that the ensemble defined here differs from that in machine learning. Accordingly, the conventional operation corresponds to a special case of the proposed work. Since the strategy is general, we combine several state-of-the-art algorithms into a new system using the proposed embedding strategy to evaluate its rate-distortion performance. Experimental results show that the ensemble RDH system outperforms the original versions in most cases, demonstrating its superiority and applicability.
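The benefit of per-subhost parameters can be sketched with a toy model. Assuming a hypothetical prediction-error-expansion-style cost (errors below a threshold are expanded, the rest are shifted; all numbers below are invented for illustration, not the paper's actual algorithms), letting each subhost choose its own threshold can only match or beat a single shared threshold:

```python
import math

# Toy prediction-error subhosts and candidate thresholds (all hypothetical)
segments = [[2, 3, 2, 4], [9, 1, 8, 0]]
params = [2, 3, 4, 5]
K = 2  # bits required per subhost

def cost(seg, t, k=K):
    """Hypothetical PEE-style distortion: errors with |e| < t are expanded
    (distortion ~ |e|), the rest are shifted by t; needs >= k expandable slots."""
    expandable = [abs(e) for e in seg if abs(e) < t]
    if len(expandable) < k:
        return math.inf  # not enough capacity at this threshold
    return sum(expandable) + t * sum(1 for e in seg if abs(e) >= t)

# conventional: one shared parameter for the whole host
global_best = min(sum(cost(s, t) for s in segments) for t in params)
# ensemble: each subhost picks its own parameter
ensemble = sum(min(cost(s, t) for t in params) for s in segments)
```

In this toy instance the smooth subhost prefers a larger threshold and the noisy one a smaller threshold, so the segmented strategy strictly reduces total distortion (15 versus 17).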
Watermarking Graph Neural Networks by Random Graphs
Many learning tasks require dealing with graph data that contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve quality of service. However, this also raises challenges for model authentication. It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models in this paper. In the proposed method, an Erdős–Rényi (ER) random graph with random node feature vectors and labels is generated as a trigger and used, together with the normal samples, to train the GNN to be protected. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, the marked GNN is activated with the trigger ER graph, and the watermark is reconstructed from the output to verify ownership. Since the ER graph is randomly generated, feeding it to a non-marked GNN yields random label predictions, resulting in a low false alarm rate for the proposed work. Experimental results also show that the performance of a marked GNN on its original task is not impaired. Moreover, the method is robust against model compression and fine-tuning, demonstrating its superiority and applicability.
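The trigger described above (an ER random graph with random node features and labels) is straightforward to generate. Below is a minimal sketch in plain Python; the sizes, edge probability, and dimensions are placeholder choices, not the paper's settings:

```python
import random

def er_trigger(n=20, p=0.3, feat_dim=8, num_classes=4, seed=42):
    """Generate a toy Erdős–Rényi trigger graph: each node pair is joined
    with probability p, and every node gets a random feature vector and a
    random label (mirroring the trigger described in the abstract)."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if rng.random() < p]
    feats = [[rng.gauss(0.0, 1.0) for _ in range(feat_dim)] for _ in range(n)]
    labels = [rng.randrange(num_classes) for _ in range(n)]
    return edges, feats, labels

edges, feats, labels = er_trigger()
```

Because both the structure and the features are random, the node labels carry no learnable signal for an independently trained GNN, which is what keeps the false alarm rate low.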
AWEncoder: Adversarial Watermarking Pre-trained Encoders in Contrastive Learning
As a self-supervised learning paradigm, contrastive learning has been widely used to pre-train powerful encoders as effective feature extractors for various downstream tasks. This process requires large amounts of unlabeled training data and computational resources, making the pre-trained encoder valuable intellectual property of its owner. However, the lack of a priori knowledge of downstream tasks makes it non-trivial to protect the intellectual property of the pre-trained encoder with conventional watermarking methods. To address this problem, in this paper we introduce AWEncoder, an adversarial method for watermarking a pre-trained encoder in contrastive learning. First, the watermark is generated as an adversarial perturbation that forces the marked training samples to deviate from their original locations and cluster around a randomly selected key image in the embedding space. Then, the watermark is embedded into the pre-trained encoder by further optimizing a joint loss function. As a result, the watermarked encoder not only performs well on downstream tasks but also allows its ownership to be verified, under both white-box and black-box conditions, by analyzing the discrepancy of the outputs produced when the encoder is used as the backbone. Extensive experiments demonstrate that the proposed work achieves good effectiveness and robustness across different contrastive learning algorithms and downstream tasks, verifying its superiority and applicability.
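The verification idea (marked samples cluster around the key image in embedding space only under a watermarked encoder) can be illustrated with a toy NumPy stand-in. Everything below is hypothetical: a linear map plays the role of the encoder, and a simple "pull toward the key embedding" fakes the effect that the paper obtains by optimizing the joint loss:

```python
import numpy as np

rng = np.random.default_rng(0)
key = rng.normal(size=32)              # randomly selected key image (toy vector)
samples = rng.normal(size=(10, 32))    # marked training samples (toy vectors)
W = rng.normal(size=(16, 32))          # toy linear "encoder"

def embed(x, pull_to_key=0.0):
    """Toy encoder output. pull_to_key > 0 mimics a watermarked encoder that
    maps marked inputs toward the key image's embedding; the actual method
    achieves this by joint-loss optimization, not by this shortcut."""
    v = (1.0 - pull_to_key) * (W @ x) + pull_to_key * (W @ key)
    return v / np.linalg.norm(v)

def verify(pull_to_key):
    # ownership statistic: mean cosine similarity between the embeddings of
    # the marked samples and the key image's embedding
    key_e = embed(key)
    return float(np.mean([embed(s, pull_to_key) @ key_e for s in samples]))

clean_score = verify(0.0)   # non-marked encoder: similarity near zero
marked_score = verify(0.9)  # watermarked encoder: outputs cluster near key
```

A threshold on this statistic separates marked from non-marked encoders, which is the black-box verification picture sketched in the abstract.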